9 research outputs found

    Comparative analysis of two asynchronous parallelization variants for a multi-objective coevolutionary solver.

    Get PDF
    We describe and compare two steady-state asynchronous parallelization variants for DECMO2++, a recently proposed multi-objective coevolutionary solver that generally displays a robust run-time convergence behavior. The two asynchronous variants were designed as trade-offs that maintain only two of the three important synchronized interactions/constraints that underpin the (generation-based) DECMO2++ coevolutionary model. A thorough performance evaluation on a test set aggregating 31 standard benchmark problems shows that, while both parallelization options generally preserve the competitive convergence behavior of the baseline coevolutionary solver, the better parallelization choice is to prioritize accurate run-time search adaptation decisions over the ability to perform equidistant fitness sharing.

    Exploring representations for optimising connected autonomous vehicle routes in multi-modal transport networks using evolutionary algorithms.

    Get PDF
    The past five years have seen rapid development of plans and test pilots aimed at introducing connected and autonomous vehicles (CAVs) in public transport systems around the world. While self-driving technology is still being perfected, public transport authorities are increasingly interested in the ability to model and optimise the benefits of adding CAVs to existing multi-modal transport systems. Using a real-world scenario from the Leeds Metropolitan Area as a case study, we demonstrate an effective way of combining macro-level mobility simulations based on open data with global optimisation techniques to discover realistic optimal deployment strategies for CAVs. The macro-level mobility simulations are used to assess the quality of a potential multi-route CAV service by quantifying geographic accessibility improvements using an extended version of Dijkstra's algorithm on an abstract multi-modal transport network. The optimisations were carried out using several popular population-based optimisation algorithms combined with several routing strategies aimed at constructing the best routes by ordering stops in a realistic sequence.
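    The accessibility computation described above can be approximated with a standard shortest-path search over a graph whose nodes are (stop, mode) pairs, so that changing mode at a stop incurs a transfer cost. The sketch below is illustrative only; the graph layout, node naming, and `transfer_penalty` parameter are assumptions, not the paper's actual representation.

```python
import heapq

def dijkstra_multimodal(graph, source, transfer_penalty=5.0):
    """Shortest travel times from `source` on a multi-modal graph.

    `graph` maps (stop, mode) nodes to lists of ((stop, mode), minutes)
    edges; switching mode at the same stop costs `transfer_penalty`.
    All names here are illustrative, not taken from the paper.
    """
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        stop, mode = node
        # Regular edges plus implicit mode-transfer edges at the same stop.
        neighbours = list(graph.get(node, []))
        for other in graph:
            if other[0] == stop and other[1] != mode:
                neighbours.append((other, transfer_penalty))
        for nxt, w in neighbours:
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt))
    return dist
```

    Accessibility of a candidate CAV route could then be scored by how much the added edges reduce these distances for a population of origin-destination pairs.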

    Autonomous supervision and optimization of product quality in a multi-stage manufacturing process based on self-adaptive prediction models.

    Get PDF
    In modern manufacturing facilities, there are two essential phases for assuring high production quality with low (or even zero) defects and waste in order to save costs for companies. The first phase concerns the early recognition of potentially arising problems in product quality; the second concerns proper reactions upon the recognition of such problems. In this paper, we address a holistic approach for handling both issues consecutively within a predictive maintenance framework for an on-line production system. We address multi-stage functionality based on (i) data-driven forecast models for (measurable) product quality criteria (QCs) at a later stage, which are established and executed through process values (and their time-series trends) recorded at an early stage of production (describing its progress), and (ii) process optimization cycles whose outputs are suggestions for proper reactions at an earlier stage in the case of forecasted downtrends or exceedances of allowed boundaries in product quality. The data-driven forecast models are established through a high-dimensional batch time-series modeling problem. For this, we employ a non-linear version of PLSR (partial least squares regression) by coupling PLS with generalized Takagi–Sugeno fuzzy systems (termed PLS-fuzzy). The models are able to self-adapt over time based on recursive parameter adaptation and rule evolution functionalities. Two concepts for increased flexibility during model updates are proposed: (i) a dynamic outweighing strategy for older samples with an adaptive update of the forgetting factor (steering forgetting intensity) and (ii) an incremental update of the latent variable space spanned by the directions (loading vectors) achieved through PLS; the whole model update approach is termed SAFM-IF (self-adaptive forecast models with increased flexibility).
    Process optimization is achieved through multi-objective optimization using evolutionary techniques, where the (trained and updated) forecast models serve as surrogate models to guide the optimization process towards high-quality Pareto fronts (containing solution candidates). A new influence analysis between process values and QCs is suggested based on the PLS-fuzzy forecast models in order to reduce the dimensionality of the optimization space and thus to guarantee high(er) quality of solutions within a reasonable amount of time (and hence better usability in on-line mode). The methodologies have been comprehensively evaluated on real on-line process data from a (micro-fluidic) chip production system, where the early stage comprises the injection molding process and the latter stage the bonding process. The results show remarkable performance in terms of low prediction errors of the PLS-fuzzy forecast models (mostly lower than those achieved by other model architectures) as well as in terms of Pareto fronts with individuals (solutions) whose fitness was close to the optimal values of the three most important target QCs (used for supervision): flatness, void events and RMSEs of the chips. Suggestions could thus be provided to experts/operators on how best to change process values and associated machining parameters in the injection molding process in order to achieve significantly higher product quality for the final chips at the end of the bonding process.
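    The "dynamic outweighing of older samples" idea can be illustrated with the textbook recursive least squares (RLS) update with a forgetting factor, which is the generic mechanism that adaptive-forgetting schemes build on. This is a minimal sketch under that assumption, not the paper's PLS-fuzzy consequent update; the class name and `lam`/`delta` parameters are invented for illustration.

```python
class ForgettingRLS:
    """Recursive least squares for y ~ w.x with exponential forgetting.

    `lam` < 1 down-weights older samples; an adaptive variant would
    tune `lam` from the recent prediction-error trend.
    """
    def __init__(self, dim, lam=0.98, delta=100.0):
        self.lam = lam
        self.w = [0.0] * dim
        # Inverse covariance estimate, initialised to delta * I.
        self.P = [[delta if i == j else 0.0 for j in range(dim)]
                  for i in range(dim)]

    def update(self, x, y):
        d = len(x)
        # Gain: k = P x / (lam + x' P x)
        Px = [sum(self.P[i][j] * x[j] for j in range(d)) for i in range(d)]
        denom = self.lam + sum(x[i] * Px[i] for i in range(d))
        k = [v / denom for v in Px]
        err = y - sum(self.w[i] * x[i] for i in range(d))
        self.w = [self.w[i] + k[i] * err for i in range(d)]
        # Covariance: P <- (P - k x' P) / lam  (P stays symmetric)
        self.P = [[(self.P[i][j] - k[i] * Px[j]) / self.lam
                   for j in range(d)] for i in range(d)]
        return err
```

    Lowering `lam` makes the model track drifting process dynamics faster at the cost of higher variance, which is exactly the trade-off an adaptive forgetting factor tries to balance.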

    On-line anomaly detection with advanced independent component analysis of multi-variate residual signals from causal relation networks.

    Get PDF
    Anomaly detection in today's industrial environments is an ambitious challenge: detecting, at an early stage, possible faults/problems which may turn into severe waste during production, defects, or damage to system components. Data-driven anomaly detection in multi-sensor networks relies on models which are extracted from multi-sensor measurements and which characterize the anomaly-free reference situation; significant deviations from these models indicate potential anomalies. In this paper, we propose a new approach based on causal relation networks (CRNs), which represent the inner causes and effects between sensor channels (or sensor nodes) in the form of partial sub-relations, and evaluate its functionality and performance on two distinct production phases within a micro-fluidic chip manufacturing scenario. The partial relations are modeled by non-linear (fuzzy) regression models characterizing the (local) degree of influence of the single causes on the effects. An advanced analysis of the multi-variate residual signals obtained from the partial relations in the CRNs is conducted: it employs independent component analysis (ICA) to characterize hidden structures in the fused residuals through independent components (latent variables) obtained via the demixing matrix. A significant change in the energy content of the latent variables, detected through automated control limits, indicates an anomaly. Suppression of possible noise content in the residuals, which decreases the likelihood of false alarms, is achieved by performing the residual analysis solely on the dominant parts of the demixing matrix.
    Our approach could detect anomalies in the process which caused bad-quality chips (with the occurrence of malfunctions) with negligible delay, based on the process data recorded by multiple sensors in two production phases: injection molding and bonding, which are carried out independently with completely different process parameter settings and on different machines (and hence can be seen as two distinct use cases). Our approach furthermore (i) produced lower false alarm rates than several related and well-known state-of-the-art methods for (unsupervised) anomaly detection, and (ii) required much lower parametrization effort (in fact, none at all). Both aspects are essential for the usability of an anomaly detection approach.
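    The monitoring step described above (latent-variable energies checked against automated control limits) can be sketched generically as follows. The demixing matrix `W` is assumed to be pre-computed, e.g. by an ICA routine; the function name, the mean-plus-n-sigma limit rule, and the calibration split are illustrative assumptions, not the paper's exact procedure.

```python
import statistics

def energy_alarms(residuals, W, train_n, n_sigma=3.0):
    """Flag anomalies from the energy of latent variables s = W r.

    `residuals` is a list of multi-variate residual vectors, `W` a
    pre-computed demixing matrix (list of rows); the first `train_n`
    anomaly-free samples calibrate mean + n_sigma control limits
    per latent component.
    """
    def energies(r):
        s = [sum(row[j] * r[j] for j in range(len(r))) for row in W]
        return [v * v for v in s]

    train = [energies(r) for r in residuals[:train_n]]
    limits = []
    for c in range(len(W)):
        col = [e[c] for e in train]
        mu, sd = statistics.mean(col), statistics.pstdev(col)
        limits.append(mu + n_sigma * sd)
    alarms = []
    for i, r in enumerate(residuals[train_n:], start=train_n):
        if any(e > limits[c] for c, e in enumerate(energies(r))):
            alarms.append(i)
    return alarms
```

    Restricting `W` to its dominant rows, as the paper suggests, would drop noise-dominated components before the limits are ever computed.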

    Potential identification and industrial evaluation of an integrated design automation workflow.

    Get PDF
    Purpose - The paper aims to raise awareness in the industry of design automation tools, especially in early design phases, by demonstrating, through a case study, the seamless integration of a prototypically implemented optimization tool, which supports design-space exploration in the early design phase, with a product configurator in operational use, which supports the drafting and detailing of the solution predominantly in the later design phase. Design/methodology/approach - Based on the comparison of modeled as-is and to-be processes of ascent assembly designs with and without design automation tools, an automation roadmap is developed. Using qualitative and quantitative assessments, the potentials and benefits, as well as acceptance and usage aspects, are evaluated. Findings - Engineers tend to consider design automation for routine tasks. Yet, prototypical implementations support the communication and identification of the potential of the early stages of the design process for exploring solution spaces. In this context, choosing from and interactively working with automatically generated alternative solutions emerged as a particular focus. Translators, enabling automatic downstream propagation of changes and thus ensuring consistency with respect to change management, were also evaluated to be of major value. Research limitations/implications - A systematic validation of design automation in design practice is presented. For generalization, more case studies are needed. Further, the derivation of appropriate metrics needs to be investigated to normalize validation of design automation in future research. Practical implications - Integration of design automation in early design phases has great potential for reducing costs in the market launch. Prototypical implementations are an important ingredient for evaluating actual usage and acceptance before implementing a live system.
    Originality/value - There is a lack of systematic validation of design automation tools supporting early design phases. In this context, this work contributes a systematically validated industrial case study. Technology transfer supporting the early design phases is important because of its high leverage potential.

    Explaining A Staff Rostering Problem By Mining Trajectory Variance Structures

    No full text
    The use of Artificial Intelligence-driven solutions in domains involving end-user interaction and cooperation has been continually growing. This has also led to an increasing need to communicate crucial information to end-users about algorithm behaviour and the quality of solutions. In this paper, we apply our method of search trajectory mining through decomposition to the solutions created by a Genetic Algorithm (GA), a non-deterministic, population-based metaheuristic. We complement this method with One-Way ANOVA statistical testing to help identify explanatory features found in the search trajectories: subsets of the set of optimization variables having both high and low influence on the search behaviour of the GA and on solution quality. This allows us to highlight these to an end-user, allowing for greater flexibility in solution selection. We demonstrate the techniques on a real-world staff rostering problem and show how, together, they identify the personnel who are critical to the optimality of the rosters being created.
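    The One-Way ANOVA test used above boils down to a single F-statistic comparing between-group and within-group variance. A minimal, textbook implementation is sketched below; how groups are formed from the mined trajectories is an assumption here, not the paper's exact setup.

```python
def one_way_anova_f(groups):
    """F-statistic for a one-way ANOVA over the given sample groups.

    Each group could hold, e.g., the solution-quality values reached
    along search trajectories that differ in one subset of the
    optimization variables; a large F flags that subset as influential.
    """
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    # Between-group sum of squares (group means vs grand mean).
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2
                     for g in groups)
    # Within-group sum of squares (samples vs their group mean).
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

    The F value would then be compared against the F-distribution with (k-1, n-k) degrees of freedom to obtain a significance level.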

    Self-adaptive evolving forecast models with incremental PLS space updating for on-line prediction of micro-fluidic chip quality

    No full text
    An important predictive maintenance task in modern production systems is to predict the quality of products in order to be able to intervene at an early stage and avoid faults and waste. Here, we address the prediction of the most important quality criteria of micro-fluidic chips: the flatness and critical size of the chips (in the form of RMSE values) and several transmission characteristics. Due to semi-manual inspection, these quality criteria are typically measured only intermittently. This leads to a high-dimensional batch process modeling problem with the goal of predicting chip quality based on the trends in the process values (time series) recorded during production. We apply a time-series-based transformation for dimension reduction of the lagged time-series space using partial least squares (PLS), and combine this with a generalized form of Takagi–Sugeno (TS) fuzzy systems to obtain a non-linear PLS forecast model (termed PLS-fuzzy). The rule consequent functions are robustly estimated by a weighted regularization scheme based on the idea of the elastic net approach. To address particular system dynamics over time, we propose dynamic updating of the non-linear PLS-fuzzy models using new on-line time-series data, with the options to (i) adapt and evolve the rule base on the fly, (ii) smoothly down-weight older samples to increase the flexibility of the fuzzy models, and (iii) update the PLS space by incrementally adapting the loading vectors; processing is achieved in a single-pass stream-mining manner. We call our method IPLS-GEFS (incremental PLS combined with generalized evolving fuzzy systems). We applied our predictive modeling approach to data from on-line micro-fluidic chip production over a time period of about six months (July to December 2016). The results show that there is significant non-linearity in the predictive modeling problem, as the non-linear PLS-fuzzy modeling approach significantly outperformed classical PLS for most of the targets (quality criteria).
    Furthermore, it is important to update the models on the fly with incremental updating of the PLS space and/or with down-weighting of older samples, as this significantly decreased the accumulated error trends of the prediction models compared to conventional updating. Reliable predictions of flatness quality (with around 10% error) and of RMSE values and transmissions (with around 15% error) can be achieved with prediction horizons of up to 4 to 5 hours into the future.
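    For orientation, the loading vectors that the incremental scheme adapts come from standard batch PLS; for a single target, the first component can be computed in closed form. The sketch below shows that batch starting point on centred data; it is not the incremental IPLS-GEFS update itself, and the function name is invented for illustration.

```python
def pls_first_component(X, y):
    """First PLS weight/loading pair for centred data.

    Returns (w, p): `w` projects rows of X onto scores t = X w
    (the direction maximising covariance with the target), and
    `p` is the corresponding loading vector. An incremental variant
    would adapt such loadings sample by sample instead of in batch.
    """
    d = len(X[0])
    n = len(X)
    # Weight vector: w proportional to X' y, normalised.
    w = [sum(X[i][j] * y[i] for i in range(n)) for j in range(d)]
    norm = sum(v * v for v in w) ** 0.5
    w = [v / norm for v in w]
    # Scores t = X w and loading p = X' t / (t' t).
    t = [sum(X[i][j] * w[j] for j in range(d)) for i in range(n)]
    tt = sum(v * v for v in t)
    p = [sum(X[i][j] * t[i] for i in range(n)) / tt for j in range(d)]
    return w, p
```

    Further components would be extracted by deflating X with t p' and repeating; an incremental method instead folds each new sample into w and p directly.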